To generate high-quality rendered images for real-time applications, it is common to trace only a few samples per pixel (spp) at a lower resolution and then supersample to the high resolution. Based on the observation that pixels rendered at a low resolution are typically highly aliased, we present a novel method for neural supersampling based on ray tracing 1/4-spp samples at the high resolution. Our key insight is that the ray-traced samples at the target resolution are accurate and reliable, which turns supersampling into an interpolation problem. We present a mask-reinforced neural network to reconstruct and interpolate high-quality image sequences. First, a novel temporal accumulation network is introduced to compute the correlation between current and previous features, significantly improving their temporal stability. Then a reconstruction network based on a multi-scale U-Net with skip connections is adopted to reconstruct and generate the desired high-resolution image. Experimental results and comparisons show that our method generates higher-quality supersampling results than current state-of-the-art methods, without increasing the total number of ray-tracing samples.
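The "supersampling as interpolation" view above can be made concrete with a minimal sketch: given a high-resolution frame holding one ray-traced sample per 2x2 block (a 1/4-spp pattern), fill the unsampled pixels from valid neighbors. This hand-rolled box-filter stand-in is only an illustration of the problem setup, not the paper's learned network.

```python
import numpy as np

def masked_interpolate(samples, mask):
    """Fill unsampled pixels by averaging valid 3x3 neighbors.

    samples: (H, W) image with ray-traced values only where mask == 1.
    mask:    (H, W) binary array, 1 where a 1/4-spp sample exists.
    A simple stand-in for the paper's mask-reinforced network.
    """
    H, W = samples.shape
    out = samples.copy()
    for y in range(H):
        for x in range(W):
            if mask[y, x]:
                continue  # keep accurate ray-traced samples untouched
            ys = slice(max(y - 1, 0), min(y + 2, H))
            xs = slice(max(x - 1, 0), min(x + 2, W))
            m = mask[ys, xs]
            if m.sum() > 0:
                out[y, x] = (samples[ys, xs] * m).sum() / m.sum()
    return out

# A 1/4-spp pattern: one sample per 2x2 block, constant scene value 1.
mask = np.zeros((4, 4))
mask[::2, ::2] = 1.0
img = np.zeros((4, 4))
img[::2, ::2] = 1.0
filled = masked_interpolate(img, mask)
```

Because the sampled pixels are treated as ground truth, interpolation only ever fills the gaps between them, which is the property the abstract's "accurate and reliable" insight relies on.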
Representing and synthesizing novel views of real-world dynamic scenes from casual monocular videos is a long-standing problem. Existing solutions typically approach dynamic scenes by applying geometry techniques or utilizing temporal information between adjacent frames, without considering the underlying background distribution of the entire scene or the transmittance along the ray dimension, which limits their performance on static and occluded areas. Our approach, $\textbf{D}$istribution-$\textbf{D}$riven neural radiance fields, offers high-quality view synthesis and a 3D solution to $\textbf{D}$etach the background from the entire $\textbf{D}$ynamic scene, and is called $\text{D}^4$NeRF. Specifically, it employs a neural representation to capture the scene distribution of the static background and a 6D-input NeRF to represent dynamic objects, respectively. Each ray sample is given an additional occlusion weight to indicate the transmittance through the static and dynamic components. We evaluate $\text{D}^4$NeRF on public dynamic scenes and on our urban driving scenes acquired from an autonomous-driving dataset. Extensive experiments demonstrate that our approach outperforms previous methods in rendering texture details and motion areas while also producing a clean static background. Our code will be released at https://github.com/Luciferbobo/D4NeRF.
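The per-sample occlusion weight described above slots naturally into standard NeRF alpha compositing. The sketch below shows the usual transmittance-weighted accumulation with an extra occlusion gate per sample; the gating form is an assumption for illustration, not $\text{D}^4$NeRF's exact formulation.

```python
import numpy as np

def composite(sigmas, deltas, colors, occ):
    """Alpha-composite samples along one ray, NeRF style.

    sigmas: (N,) densities; deltas: (N,) inter-sample distances;
    colors: (N, 3) radiance; occ: (N,) extra occlusion weights in [0, 1]
    gating each sample's contribution (an illustrative stand-in for the
    learned occlusion weight in the paper).
    """
    alphas = 1.0 - np.exp(-sigmas * deltas)               # opacity per sample
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas]))[:-1]  # transmittance
    weights = trans * alphas * occ                        # occlusion-gated weights
    return (weights[:, None] * colors).sum(axis=0)

sigmas = np.array([0.0, 10.0, 0.0])                       # dense middle sample
deltas = np.ones(3)
colors = np.array([[1.0, 0, 0], [0, 1.0, 0], [0, 0, 1.0]])
occ = np.ones(3)
rgb = composite(sigmas, deltas, colors, occ)
```

Setting `occ` near zero for samples attributed to the moving foreground would let the static-background branch ignore them, which is the intuition behind detaching the background.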
Forecasts by the European Centre for Medium-Range Weather Forecasts (ECMWF; EC for short) can provide a basis for maritime-disaster warning systems, but they contain some systematic biases. The fifth-generation EC atmospheric reanalysis (ERA5) data have high accuracy but are delayed by about 5 days. To overcome this issue, a spatiotemporal deep-learning method can be used for nonlinear mapping between EC and ERA5 data, improving the quality of EC wind-forecast data in real time. In this study, we developed the Multi-Task Double Encoder Trajectory Gated Recurrent Unit (MT-DETrajGRU) model, which uses an improved double-encoder forecaster architecture to model the spatiotemporal sequence of the U and V components of the wind field, and we designed a multi-task learning loss function to correct wind speed and wind direction simultaneously with a single model. The study area was the western North Pacific (WNP), and real-time rolling bias corrections were made for 10-day wind-field forecasts released by the EC between December 2020 and November 2021, divided into four seasons. Compared with the original EC forecasts, after correction with the MT-DETrajGRU model the wind-speed and wind-direction biases in the four seasons were reduced by 8-11% and 9-14%, respectively. In addition, the proposed method models the data uniformly under different weather conditions: the correction performance under normal and typhoon conditions was comparable, indicating that the data-driven model constructed here is robust and generalizable.
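A multi-task loss over wind speed and direction can be derived directly from the U and V components. The sketch below combines a squared speed error with a wrap-aware squared direction error; the weighting scheme and names (`w_speed`, `w_dir`) are assumptions for illustration, not the paper's exact loss.

```python
import numpy as np

def wind_multitask_loss(u_pred, v_pred, u_true, v_true,
                        w_speed=1.0, w_dir=1.0):
    """Joint wind-speed / wind-direction loss on U, V wind components.

    An illustrative multi-task combination; the task weights are
    hypothetical, not MT-DETrajGRU's published formulation.
    """
    speed_pred = np.hypot(u_pred, v_pred)          # speed = |(u, v)|
    speed_true = np.hypot(u_true, v_true)
    speed_err = np.mean((speed_pred - speed_true) ** 2)

    dir_pred = np.arctan2(v_pred, u_pred)          # direction as an angle
    dir_true = np.arctan2(v_true, u_true)
    # wrap the angular difference into (-pi, pi] before squaring
    d = np.arctan2(np.sin(dir_pred - dir_true), np.cos(dir_pred - dir_true))
    dir_err = np.mean(d ** 2)

    return w_speed * speed_err + w_dir * dir_err

loss = wind_multitask_loss(np.array([3.0]), np.array([4.0]),
                           np.array([3.0]), np.array([4.0]))
```

The angular wrap step matters: a naive `(dir_pred - dir_true) ** 2` would heavily penalize forecasts that differ only by crossing the ±180° boundary.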
Low-dose computed tomography (CT) plays a significant role in reducing radiation risk in clinical applications. However, lowering the radiation dose significantly degrades the image quality. The rapid development and wide application of deep learning have opened new directions for low-dose CT imaging algorithms. We therefore propose a fully unsupervised one-sample diffusion model (OSDM) in the projection domain for low-dose CT reconstruction. To extract sufficient prior information from a single sample, the Hankel matrix formulation is employed. In addition, penalized weighted least-squares and total variation are introduced to achieve superior image quality. Specifically, we first train a score-based generative model on one sinogram by extracting a large number of tensors from the structural Hankel matrix as network input to capture the prior distribution. Then, at the inference stage, the stochastic differential equation solver and a data-consistency step are performed iteratively to obtain the sinogram data. Finally, the final image is obtained through the filtered back-projection algorithm. The reconstructed results approach the normal-dose counterparts, proving that OSDM is a practical and effective model for reducing artifacts and preserving image quality.
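The Hankel construction is what lets a single sample yield many training tensors: overlapping windows of the data become the rows of a highly redundant matrix. A minimal 1D sketch (OSDM operates on a 2D structural Hankel matrix built from the sinogram; this is only the core idea):

```python
import numpy as np

def hankel(signal, window):
    """Build a Hankel matrix whose rows are overlapping windows of a
    1D signal. Each anti-diagonal is constant: H[i, j] depends only
    on i + j, which is the redundancy that one-sample training mines.
    """
    n = len(signal) - window + 1
    return np.stack([signal[i:i + window] for i in range(n)])

H = hankel(np.arange(6), 3)
# H has 4 rows of length 3 drawn from just 6 scalars.
```

From one length-6 signal the construction already yields 12 matrix entries; applied to a full sinogram it produces the "great number of tensors" used as network input.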
Face animation is one of the hottest topics in computer vision, and has achieved promising performance with the help of generative models. However, generating identity-preserving and photo-realistic images remains a critical challenge due to complex motion deformation and the modeling of intricate facial details. To address these issues, we propose a Face Neural Volume Rendering (FNeVR) network to fully explore the potential of 2D motion warping and 3D volume rendering in a unified framework. In FNeVR, we design a 3D Face Volume Rendering (FVR) module to enhance the facial details of image rendering. Specifically, we first extract 3D information with a well-designed architecture, and then introduce an orthogonal adaptive ray-sampling module for efficient rendering. We also design a lightweight pose editor, enabling FNeVR to edit the facial pose in a simple yet effective way. Extensive experiments show that our FNeVR obtains the best overall quality and performance on widely used talking-head benchmarks.
In recent years, mobile robots have become increasingly ambitious and are being deployed in large-scale scenes. As a high-level understanding of the environment, a sparse skeleton graph is beneficial for more efficient global planning. Currently, existing skeleton graph generation solutions suffer from several major limitations, including poor adaptability to different map representations, dependence on robot inspection trajectories, and high computational overhead. In this paper, we propose an efficient and flexible algorithm that generates a trajectory-independent 3D sparse topological skeleton graph capturing the spatial structure of free space. In our method, an efficient ray sampling and validation mechanism is adopted to find distinct free-space regions, which contribute the vertices of the skeleton graph, with traversability between adjacent vertices as the edges. A cycle-formation scheme is also used to maintain the compactness of the skeleton graph. Benchmark comparisons against state-of-the-art works show that our method generates sparse graphs in a shorter time, providing high-quality global planning paths. Experiments conducted in the real world further validate the capability of our method in realistic scenarios. Our method will be made open source to benefit the community.
Recently, text classification models based on graph neural networks (GNNs) have attracted increasing attention. Most of these models adopt a similar network paradigm, i.e., initialization with pre-trained node embeddings and two-layer graph convolution. In this work, we propose TextRGNN, an improved GNN structure that introduces residual connections to deepen the convolutional network. Our structure obtains a wider node receptive field and effectively suppresses the over-smoothing of node features. In addition, we integrate a probabilistic language model into the initialization of graph node embeddings, allowing better extraction of non-graph semantic information. Experimental results show that our model is general and efficient. It can significantly improve classification accuracy at both the corpus level and the text level, and achieves SOTA performance on a variety of text classification datasets.
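A residual connection around a graph convolution is the standard remedy for over-smoothing when stacking layers. The sketch below shows one such layer in the usual GCN form, `ReLU(ÂXW) + X` with symmetric adjacency normalization; this is a generic illustration of the mechanism TextRGNN uses, not the paper's exact architecture.

```python
import numpy as np

def gcn_layer_residual(X, A, W):
    """One graph convolution with a residual (skip) connection.

    X: (N, F) node features; A: (N, N) adjacency; W: (F, F) weights.
    Â is the symmetrically normalized adjacency with self-loops.
    """
    A_hat = A + np.eye(A.shape[0])                 # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt       # Â = D^-1/2 (A+I) D^-1/2
    H = np.maximum(A_norm @ X @ W, 0.0)            # ReLU(Â X W)
    return H + X                                   # residual connection

# Toy graph: 3 nodes in a path, identity weights (illustrative only).
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float)
X = np.eye(3)
out = gcn_layer_residual(X, A, np.eye(3))
```

Because each layer's output keeps an identity path back to its input, stacking many layers widens the receptive field without forcing all node features toward a common average.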
Action quality assessment (AQA) from videos is a challenging vision task, since the relation between videos and action scores is difficult to model; it has therefore been widely studied in the literature. Traditionally, the AQA task is treated as a regression problem to learn the underlying mapping between videos and action scores. Recently, methods based on uncertainty score distribution learning (USDL) have achieved success due to the introduction of label distribution learning (LDL). However, USDL does not apply to datasets with continuous labels and requires a fixed variance during training. In this paper, to address the above problems, we further develop the Distribution Auto-Encoder (DAE). DAE combines regression algorithms and label distribution learning (LDL). Specifically, it encodes videos into distributions and uses the reparameterization trick from variational auto-encoders (VAE) to sample scores, establishing a more accurate mapping between videos and scores. Meanwhile, a combined loss is constructed to accelerate the training of DAE. DAE-MT is further proposed to handle AQA on multi-task datasets. We evaluate our DAE approach on the MTL-AQA and JIGSAWS datasets. Experimental results on public datasets show that our method achieves state-of-the-art Spearman's rank correlation: 0.9449 on MTL-AQA and 0.73 on JIGSAWS.
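The reparameterization trick mentioned above draws a sample as a deterministic function of the predicted parameters plus independent noise, so gradients flow through the sampling step. A minimal sketch (the score scale and variable names are illustrative, not DAE's actual outputs):

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, log_var, rng):
    """VAE reparameterization: score = mu + sigma * eps, eps ~ N(0, 1).

    Writing the sample this way keeps it differentiable w.r.t. mu and
    log_var, which is what lets DAE backpropagate through the sampled
    action score. Shapes and naming here are illustrative.
    """
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

mu = np.array([80.0])        # hypothetical predicted mean score
log_var = np.array([-2.0])   # hypothetical predicted log-variance
score = reparameterize(mu, log_var, rng)
```

Predicting `log_var` per video, rather than fixing it, is precisely what removes USDL's fixed-variance restriction.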
Decentralized state estimation is one of the most fundamental components of autonomous aerial swarm systems in GPS-denied areas, yet it remains a highly challenging research topic. This paper proposes Omni-swarm, a decentralized omnidirectional visual-inertial-UWB state estimation system, to address this research niche. To solve the problems of observability, complicated initialization, insufficient accuracy, and lack of global consistency, we introduce an omnidirectional perception front-end in Omni-swarm. It consists of stereo wide-angle cameras and ultra-wideband sensors, visual-inertial odometry, multi-drone map-based localization, and a visual drone tracking algorithm. The measurements from the front-end are fused with graph-based optimization in the back-end. The proposed method achieves centimeter-level relative state estimation accuracy while ensuring global consistency within the aerial swarm, as demonstrated by experimental results. Moreover, inter-drone collision avoidance can be supported without any external devices, showing the potential of Omni-swarm to serve as the foundation of autonomous aerial swarms.
Accurately predicting physical properties is critical for discovering and designing new materials. Machine learning techniques have attracted significant attention in the materials science community for their potential in large-scale screening. Graph convolutional neural networks (GCNNs) are among the most successful machine learning methods because of their flexibility and effectiveness in describing 3D structural data. Most existing GCNN models focus on the topological structure but overly simplify the three-dimensional geometry. However, in materials science, the 3D spatial distribution of atoms is crucial for determining atomic states and interatomic forces. This paper proposes an adaptive GCNN with a novel convolution mechanism that simultaneously models the interactions among all neighboring atoms in three-dimensional space. We apply the proposed model to two distinctly challenging problems of predicting material properties. The first is Henry's constant for gas adsorption in metal-organic frameworks (MOFs), which is notoriously sensitive to the atomic configuration. The second is the ionic conductivity of solid-state crystal materials, which is difficult because of the small amount of labeled data available for training. The new model outperforms existing graph-based models on both datasets, suggesting that the critical three-dimensional geometric information is indeed captured.
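One simple way to let 3D geometry, rather than topology alone, drive a graph convolution is to weight each neighbor's message by a function of the interatomic distance. The sketch below uses a Gaussian of the squared pairwise distance; this is a minimal geometric-message-passing stand-in, not the paper's adaptive convolution mechanism.

```python
import numpy as np

def geometric_messages(positions, features, gamma=1.0):
    """Aggregate neighbor features weighted by exp(-gamma * d^2) of the
    3D interatomic distance, so the full spatial distribution of atoms
    (not just graph connectivity) enters the convolution.

    positions: (N, 3) atom coordinates; features: (N, F) atom features.
    """
    diff = positions[:, None, :] - positions[None, :, :]
    dist2 = (diff ** 2).sum(-1)                    # squared pair distances
    w = np.exp(-gamma * dist2)
    np.fill_diagonal(w, 0.0)                       # no self-message
    w /= w.sum(axis=1, keepdims=True)              # normalize per atom
    return w @ features

# Three atoms on a line: the third sits far from the first two.
pos = np.array([[0.0, 0, 0], [1.0, 0, 0], [5.0, 0, 0]])
feat = np.eye(3)
msg = geometric_messages(pos, feat)
```

A purely topological convolution with the same edge set would weight both neighbors equally; here the first atom's message is dominated by its near neighbor, illustrating the sensitivity to atomic configuration that the abstract highlights for MOFs.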